CAPTCHA Systems
A Hybrid CAPTCHA Combining Generative AI with Keystroke Dynamics for Enhanced Bot Detection
Completely Automated Public Turing tests to tell Computers and Humans Apart (CAPTCHAs) are a foundational component of web security, yet traditional implementations suffer from a trade-off between usability and resilience against AI-powered bots. This paper introduces a novel hybrid CAPTCHA system that synergizes the cognitive challenges posed by Large Language Models (LLMs) with the behavioral biometric analysis of keystroke dynamics. Our approach generates dynamic, unpredictable questions that are trivial for humans but non-trivial for automated agents, while simultaneously analyzing the user's typing rhythm to distinguish human patterns from robotic input. We present the system's architecture, formalize the feature extraction methodology for keystroke analysis, and report on an experimental evaluation. The results indicate that our dual-layered approach achieves a high degree of accuracy in bot detection, successfully thwarting both paste-based and script-based simulation attacks, while maintaining a high usability score among human participants. This work demonstrates the potential of combining cognitive and behavioral tests to create a new generation of more secure and user-friendly CAPTCHAs.
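To make the keystroke-dynamics layer concrete, here is a minimal sketch of the kind of feature extraction the abstract describes. The paper's exact feature set and thresholds are not given here, so the event format, the dwell/flight features, and the variance floor below are all illustrative assumptions: scripted input tends to have near-zero timing variance, while human typing does not.

```python
# Hypothetical keystroke-dynamics sketch: each event is (key, down_ms, up_ms).

def extract_features(events):
    """Compute dwell times (how long a key is held) and flight times (gaps between keys)."""
    dwells = [up - down for _, down, up in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    n = len(dwells)
    mean_dwell = sum(dwells) / n
    var_dwell = sum((d - mean_dwell) ** 2 for d in dwells) / n
    return {"mean_dwell": mean_dwell, "var_dwell": var_dwell,
            "mean_flight": sum(flights) / len(flights) if flights else 0.0}

def looks_scripted(features, var_floor=4.0):
    """Scripted input replays keys with near-constant timing; humans are irregular."""
    return features["var_dwell"] < var_floor

robot = [("a", 0, 50), ("b", 100, 150), ("c", 200, 250)]   # every key held exactly 50 ms
human = [("a", 0, 80), ("b", 120, 310), ("c", 400, 460)]   # irregular hold times
print(looks_scripted(extract_features(robot)), looks_scripted(extract_features(human)))
# True False
```

A real deployment would feed many such features into a trained classifier rather than a single variance threshold; the point of the sketch is only that paste-based and replayed input is separable from human rhythm on timing statistics alone.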
Aura-CAPTCHA: A Reinforcement Learning and GAN-Enhanced Multi-Modal CAPTCHA System
Joydeep Chandra, Prabal Manhas, Ramanjot Kaur, Rashi Sahay
Aura-CAPTCHA was developed as a multi-modal CAPTCHA system to address vulnerabilities in traditional methods that are increasingly bypassed by AI technologies, such as Optical Character Recognition (OCR) and adversarial image processing. The design integrated Generative Adversarial Networks (GANs) for generating dynamic image challenges, Reinforcement Learning (RL) for adaptive difficulty tuning, and Large Language Models (LLMs) for creating text and audio prompts. Visual challenges included 3x3 grid selections with at least three correct images, while audio challenges combined randomized numbers and words into a single task. RL adjusted difficulty based on incorrect attempts, response time, and suspicious user behavior. Evaluations on real-world traffic demonstrated a 92% human success rate and a 10% bot bypass rate, significantly outperforming existing CAPTCHA systems. The system provided a robust and scalable approach for securing online applications while remaining accessible to users, addressing gaps highlighted in previous research.
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.89)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.88)
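The adaptive-difficulty idea in the abstract can be illustrated with a simple rule-based stand-in for the behavior an RL policy would learn. The paper's actual state, action, and reward design are not described here, so the level range, the fast-response threshold, and the update rules below are assumptions: difficulty rises after failures (especially suspiciously fast ones) and eases after unhurried correct answers.

```python
# Heuristic sketch of difficulty tuning in the spirit of Aura-CAPTCHA's RL
# component (illustrative only; not the paper's learned policy).

def update_difficulty(level, solved, response_time_s,
                      min_level=1, max_level=5, fast_threshold=2.0):
    if not solved:
        # A failure bumps difficulty; a *fast* failure is bot-like, so bump harder.
        level += 2 if response_time_s < fast_threshold else 1
    elif response_time_s >= fast_threshold:
        # An unhurried correct answer suggests a human; relax the challenge.
        level -= 1
    return max(min_level, min(max_level, level))

level = 3
level = update_difficulty(level, solved=False, response_time_s=0.4)  # fast failure -> 5
level = update_difficulty(level, solved=True, response_time_s=6.0)   # slow success -> 4
print(level)
```

An actual RL formulation would treat (attempt history, response time, behavior flags) as state, difficulty changes as actions, and reward the policy for admitting humans while blocking bots.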
mCaptcha: Replacing Captchas with Rate Limiters to Improve Security and Accessibility
For many years, publicly accessible Web applications have been protecting their services from bots and scripts by asking users to solve captchas (Completely Automated Public Turing tests to tell Computers and Humans Apart), puzzles designed to be challenging for machines to solve yet simple for humans, such as clicking on certain locations in an image or recognizing elongated characters or digits. Designed to stop robotic assaults like spamming, data scraping, and brute-force login attempts, captchas act as a security precaution to determine whether a user is a human or a software program. Captcha techniques are employed in many different areas, including e-transactions, entering a website's secure areas, gathering email signups, and ensuring that only humans vote when conducting polls and surveys. They are also used to hinder attackers and spammers from injecting malicious software into online registration forms. As such, captchas are also employed as a line of defense against threats such as DDoS attacks, dictionary attacks, malvertising, and botnet and spam attacks.
- North America > United States > Arizona (0.05)
- Asia > China (0.05)
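The rate-limiting alternative that mCaptcha represents can be sketched with the classic token-bucket mechanism. This is a generic illustration, not mCaptcha's actual code (which additionally uses variable-difficulty proof-of-work): each client gets a bucket that refills at a fixed rate, so bursts from scripts are throttled while ordinary human pacing passes through.

```python
# Generic token-bucket rate limiter (illustrative; not mCaptcha's implementation).

class TokenBucket:
    def __init__(self, capacity, refill_per_s):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_s = refill_per_s
        self.last = 0.0

    def allow(self, now):
        """Refill proportionally to elapsed time, then spend one token if available."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_s)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=3, refill_per_s=1.0)
burst = [bucket.allow(0.0) for _ in range(5)]  # a scripted burst at t=0
print(burst)  # [True, True, True, False, False]
```

Unlike a captcha, this imposes no cognitive task on the user at all, which is the accessibility argument the article makes.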
GOTCHA– A CAPTCHA System for Live Deepfakes
New research from New York University adds to the growing indications that we may soon have to take the deepfake equivalent of a 'drunk test' in order to authenticate ourselves before commencing a sensitive video call – such as a work-related videoconference, or any other sensitive scenario that may attract fraudsters using real-time deepfake streaming software. Some of the active and passive challenges applied to video-call scenarios in GOTCHA: the user must comply with and pass the challenges, while additional 'passive' methods, over which the participant has no influence (such as attempting to overload a potential deepfake system), are also applied. The proposed system is titled GOTCHA – a tribute to the CAPTCHA systems that have become an increasing obstacle to web-browsing over the last 10-15 years, wherein automated systems require the user to perform tasks that machines are bad at, such as identifying animals or deciphering garbled text (and, ironically, these challenges often turn the user into a free AMT-style outsourced annotator). In essence, GOTCHA extends the August 2022 DF-Captcha paper from Ben-Gurion University, which was the first to propose making the person at the other end of the call jump through a few visually semantic hoops in order to prove their authenticity.
- North America > United States > New York (0.25)
- North America > United States > California > San Diego County > San Diego (0.06)
Breaking CAPTCHA Using Machine Learning in 0.05 Seconds
Everyone despises CAPTCHAs (humans, that is, since bots do not have emotions) -- those annoying images containing hard-to-read text that you have to type in before you can access or do "something" online. CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) were developed to prevent automated programs from misbehaving on the web (filling out online forms, accessing restricted files, hitting a website an incredible number of times, and so on) by verifying that the end-user is "human" and not a bot. Several attacks on CAPTCHAs have been proposed in the past, but none has been as accurate and fast as the machine learning algorithm presented by a group of researchers from Lancaster University, Northwest University, and Peking University, shown below. One of the first well-known people to break CAPTCHAs was Adrian Rosebrock, who, in his book "Deep Learning for Computer Vision with Python" [4], describes how he bypassed the CAPTCHA system on the E-ZPass New York website using machine learning: he downloaded a large image dataset of CAPTCHA examples and used deep learning to train his model. The main difference between Adrian's solution and that of the researchers from Lancaster, Northwest, and Peking is that the researchers did not need to download a large dataset of images to break the CAPTCHA system; au contraire, they used a generative adversarial network (GAN) to create synthesized CAPTCHAs, along with a small dataset of real CAPTCHAs, to build an extremely fast and accurate CAPTCHA solver.
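The structural trick described above — pretrain a solver on cheap synthesized CAPTCHAs so that only a handful of real, hand-labelled ones are needed — can be sketched with a toy stand-in. Everything here is illustrative: the "generator" is just noise around a one-hot label vector rather than a GAN, and the "solver" is a 1-nearest-neighbour lookup rather than a deep network.

```python
import random

# Toy two-stage scheme: large synthetic training set, tiny real evaluation set.
random.seed(0)
CHARS = "abc"

def synthesize(label, noise=0.3):
    """Fake 'rendering': a one-hot vector for the label plus generator noise."""
    base = [1.0 if c == label else 0.0 for c in CHARS]
    return [v + random.uniform(-noise, noise) for v in base]

# Stage 1: plentiful synthetic samples, no human labelling required.
train = [(synthesize(c), c) for c in CHARS for _ in range(50)]
# Stage 2: only a small set of real samples (here, lower-noise stand-ins).
real = [(synthesize(c, noise=0.1), c) for c in CHARS]

def solve(sample):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda t: dist(t[0], sample))[1]

accuracy = sum(solve(x) == y for x, y in real) / len(real)
print(accuracy)
```

The economics are the point: if synthesized challenges are statistically close enough to real ones, the expensive data-collection step that Adrian's approach required largely disappears.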
How hackers use machine learning to breach cybersecurity!
From a technical standpoint, machine learning is a field in which absolute cybersecurity is impossible! It does not promise to completely protect the confidentiality, integrity, and availability of data and networks, but instead offers practical ways to reduce the scale of attacks and improve the security level to a great extent. One reason we cannot entirely prevent cybersecurity threats with machine learning is that cyber attackers are adopting the same technology for their attacks, including malware, phishing, spam, DDoS, ransomware, spyware, etc. Moreover, offensive capabilities are much cheaper and easier to develop and deploy than the corresponding defensive measures. The use of AI-powered malicious apps in massive cyberattacks increases the speed, adaptability, agility, coordination, and even sophistication of attacks across a large population of networks and devices. Using supervised and unsupervised learning, these malicious programs can hide within a victim's system and generate credentials to infiltrate devices by automatically cycling through password and username options faster than a human could test them. They can self-learn how and when to attack their target system and evade defensive measures through self-initiated changes in signature and behavior in the event of a counterattack.
- North America > United States > New York (0.05)
- Europe > United Kingdom (0.05)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
How Does Artificial Intelligence Really Work in Agriculture?
How far are we from a computer being able to decide what variety or hybrid should be planted in a field, how it should be fertilized, and which crop protection chemicals to prescribe? The promise of artificial intelligence (AI) has been a popular topic in the media, not only for agriculture but for a variety of applications. But what is AI, how does it work in farming, and what does it mean for agriculture? One of the most common proposed uses of this new technology in agriculture is helping to make seed selection recommendations for individual fields. How can a computer be programmed to know what seed to plant?
How to break a CAPTCHA system in 15 minutes with Machine Learning
CAPTCHAs were designed to prevent computers from automatically filling out forms by verifying that you are a real person. But with the rise of deep learning and computer vision, they can now often be defeated easily. I've been reading the excellent book Deep Learning for Computer Vision with Python by Adrian Rosebrock. Adrian didn't have access to the source code of the application generating the CAPTCHA image. To break the system, he had to download hundreds of example images and manually solve them to train his system.
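A first step in a pipeline like the one Adrian describes is cutting the downloaded CAPTCHA images into individual glyphs so each character can be labelled and classified on its own. His actual code uses OpenCV contour detection; the sketch below is a simplified stand-in that splits a binary pixel grid at blank column gaps.

```python
# Toy character segmentation by blank-column gaps (illustrative; real pipelines
# such as Adrian's use OpenCV contours on grayscale images).

def segment_columns(image):
    """Split a row-major binary image into per-character (start, end) column spans."""
    width = len(image[0])
    ink = [any(row[x] for row in image) for x in range(width)]  # column has any ink?
    spans, start = [], None
    for x, filled in enumerate(ink):
        if filled and start is None:
            start = x                      # a character begins
        elif not filled and start is not None:
            spans.append((start, x))       # a character ends at the blank column
            start = None
    if start is not None:
        spans.append((start, width))       # character touching the right edge
    return spans

# Two 2-pixel-wide "characters" separated by one blank column:
img = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 0, 1],
]
print(segment_columns(img))  # [(0, 2), (3, 5)]
```

Gap-based splitting fails when glyphs touch or overlap, which is exactly why harder CAPTCHA schemes deliberately crowd their characters.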
AI Bot That Mimics the Human Eye Breaks reCAPTCHAs With 66.6% Accuracy
Computer scientists have created an AI algorithm that works on the same principles as the human eye and that can break various CAPTCHA systems with accuracies of over 50%. More specifically, this new system solved Google reCAPTCHAs with 66.6% accuracy, BotDetect with 64.4%, Yahoo with 57.4%, and PayPal image challenges with 57.1%. For the record, any CAPTCHA system that automated systems can break with an accuracy of over 1% is considered broken. The twelve-person research team designed their AI algorithm to go through the same steps a human eye and brain go through when viewing an image: there are components that recognize the edges of shapes, a component that categorizes the shape, one that accounts for the angle at which an observer views the shape, and a final component that attempts to match the shape against a standard form of a letter or number (usually stored in the AI as a Georgia font character).
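The first stage of that layered pipeline — finding the edges of shapes — can be illustrated with a toy gradient-based edge detector. The later stages (shape categorization, viewpoint handling, matching against a stored font) are omitted here; this sketch only shows the kind of low-level component the article describes, using a simple forward-difference gradient rather than whatever operator the researchers actually used.

```python
# Toy edge detector: mark pixels where horizontal or vertical intensity
# changes sharply (forward differences; illustrative only).

def edge_map(image, threshold=1):
    """Return a binary map of pixels whose gradient magnitude exceeds a threshold."""
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = image[y][min(x + 1, w - 1)] - image[y][x]  # horizontal change
            gy = image[min(y + 1, h - 1)][x] - image[y][x]  # vertical change
            if abs(gx) >= threshold or abs(gy) >= threshold:
                edges[y][x] = 1
    return edges

# A bright square on a dark background: edges light up along its borders.
img = [[0, 0, 0, 0],
       [0, 5, 5, 0],
       [0, 5, 5, 0],
       [0, 0, 0, 0]]
print(edge_map(img))
```

Feeding such an edge map into shape categorization and font matching is what lets the overall system tolerate the distortions CAPTCHAs rely on, since edges survive warping better than raw pixel patterns do.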